Results 1 - 20 of 56
2.
Psychol Sci ; : 9567976231221558, 2023 Dec 27.
Article in English | MEDLINE | ID: mdl-38150595
3.
Psychol Sci ; : 9567976231221573, 2023 Dec 27.
Article in English | MEDLINE | ID: mdl-38150599
4.
F1000Res ; 12: 144, 2023.
Article in English | MEDLINE | ID: mdl-37600907

ABSTRACT

Background: Scientists are increasingly concerned with making their work easy to verify and build upon. Associated practices include sharing data, materials, and analytic scripts, and preregistering protocols. This shift towards increased transparency and rigor has been referred to as a "credibility revolution." The credibility of empirical legal research has been questioned in the past due to its distinctive peer review system and because many of its researchers, coming from legal backgrounds, are not trained in study design or statistics. Still, there has been no systematic study of the transparency and credibility-related characteristics of published empirical legal research. Methods: To fill this gap and provide an estimate of current practices that can be tracked as the field evolves, we assessed 300 empirical articles from highly ranked law journals, including both faculty-edited and student-edited journals. Results: We found high levels of article accessibility (86%, 95% CI = [82%, 90%]), especially among student-edited journals (100%). Few articles stated that a study's data were available (19%, 95% CI = [15%, 23%]). Statements of preregistration (3%, 95% CI = [1%, 5%]) and availability of analytic scripts (6%, 95% CI = [4%, 9%]) were very uncommon. Conclusion: We suggest that empirical legal researchers and the journals that publish their work cultivate norms and practices to encourage research credibility. Our estimates may be revisited to track the field's progress in the coming years.
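The interval estimates above can be reproduced with a standard confidence interval for a sample proportion. The abstract does not state which interval method the authors used; the sketch below uses the normal-approximation (Wald) interval, which matches the reported data-availability estimate (19% of 300 articles, i.e. 57 articles).

```python
import math

def proportion_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a sample proportion.

    k: number of articles with the characteristic; n: total articles sampled.
    Returns (point estimate, lower bound, upper bound), clipped to [0, 1].
    """
    p = k / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# 57 of 300 sampled articles (19%) stated that the study's data were available.
p, lo, hi = proportion_ci(57, 300)
print(f"{p:.0%}, 95% CI = [{lo:.0%}, {hi:.0%}]")  # 19%, 95% CI = [15%, 23%]
```

For small samples or proportions near 0 or 1 (such as the 3% preregistration estimate), a Wilson or Clopper-Pearson interval would be the more robust choice.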


Subject(s)
Periodicals as Topic , Humans , Publications , Research Design , Empirical Research , Peer Review
5.
J Pers Soc Psychol ; 125(4): 874-901, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36996169

ABSTRACT

Every research project has limitations. The limitations that authors acknowledge in their articles offer a glimpse into some of the concerns that occupy a field's attention. We examine the types of limitations authors discuss in their published articles by categorizing them according to the four validities framework and investigate whether the field's attention to each of the four validities has shifted from 2010 to 2020. We selected one journal in social and personality psychology (Social Psychological and Personality Science; SPPS), the subfield most in the crosshairs of psychology's replication crisis. We sampled 440 articles (with half of those articles containing a subsection explicitly addressing limitations), and we identified and categorized 831 limitations across the 440 articles. Articles with limitations sections reported more limitations than those without (avg. 2.6 vs. 1.2 limitations per article). Threats to external validity were the most common type of reported limitation (est. 52% of articles), and threats to statistical conclusion validity were the least common (est. 17% of articles). Authors reported slightly more limitations over time. Despite the extensive attention paid to statistical conclusion validity in the scientific discourse throughout psychology's credibility revolution, our results suggest that concerns about statistics-related issues were not reflected in social and personality psychologists' reported limitations. The high prevalence of limitations concerning external validity might suggest it is time that we improve our practices in this area, rather than apologizing for these limitations after the fact. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Personality Disorders , Personality , Humans , Research Design , Psychology
6.
Perspect Psychol Sci ; 18(3): 710-722, 2023 05.
Article in English | MEDLINE | ID: mdl-36301777

ABSTRACT

The replication crisis and credibility revolution in the 2010s brought a wave of doubts about the credibility of social and personality psychology. We argue that as a field, we must reckon with the concerns brought to light during this critical decade. How the field responds to this crisis will reveal our commitment to self-correction. If we do not take the steps necessary to address our problems and simply declare the crisis to be over or the problems to be fixed without evidence, we risk further undermining our credibility. To fully reckon with this crisis, we must empirically assess the state of the field to take stock of how credible our science actually is and whether it is improving. We propose an agenda for metascientific research, and we review approaches to empirically evaluate and track where we are as a field (e.g., analyzing the published literature, surveying researchers). We describe one such project (Surveying the Past and Present State of Published Studies in Social and Personality Psychology) underway in our research group. Empirical evidence about the state of our field is necessary if we are to take self-correction seriously and if we hope to avert future crises.


Subject(s)
Personality , Research Personnel , Humans , Surveys and Questionnaires
7.
R Soc Open Sci ; 9(4): 200048, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35425627

ABSTRACT

What research practices should be considered acceptable? Historically, scientists have set the standards for what constitutes acceptable research practices. However, there is value in considering non-scientists' perspectives, including those of research participants. We surveyed 1,873 participants from MTurk and university subject pools after their participation in one of eight minimal-risk studies. We asked participants how they would feel if (mostly) common research practices were applied to their data: p-hacking/cherry-picking results, selective reporting of studies, Hypothesizing After Results are Known (HARKing), committing fraud, conducting direct replications, sharing data, sharing methods, and open access publishing. An overwhelming majority of psychology research participants thought questionable research practices (e.g. p-hacking, HARKing) were unacceptable (68.3-81.3%) and were supportive of practices to increase transparency and replicability (71.4-80.1%). A surprising number of participants expressed positive or neutral views toward scientific fraud (18.7%), raising concerns about data quality. We grapple with this concern and interpret our results in light of the limitations of our study. Despite the ambiguity in our results, we argue that there is evidence (from our study and others') that researchers may be violating participants' expectations and should be transparent with participants about how their data will be used.

8.
J Pers Soc Psychol ; 122(4): 731-748, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35254856

ABSTRACT

What do people think their best and worst personality traits are? Do their friends agree? Across three samples, 463 college students ("targets") and their friends freely described two traits they most liked and two traits they most disliked about the target. Coders categorized these open-ended trait descriptors into high or low poles of six trait domains (extraversion, agreeableness, conscientiousness, emotional stability, openness, and honesty-humility) and judged whether targets and friends reported the same specific best and worst traits. Best traits almost exclusively reflected high levels of the major trait domains (especially high agreeableness and extraversion). In contrast, although worst traits typically reflected low levels of these traits (especially low emotional stability), they sometimes also revealed the downsides of having high levels of these traits (e.g., high extraversion: "loud"; high agreeableness: "people-pleaser"). Overall, targets and friends mentioned similar kinds of best traits; however, targets emphasized low emotional stability worst traits more than friends did, whereas friends emphasized low prosociality worst traits more than targets did. Targets and friends also showed a moderate amount of self-other agreement on what the targets' best and worst traits were. These results (a) shed light on the traits that people consider to be most important in themselves and their friends, (b) suggest that the desirability of some traits may be in the eye of the beholder, (c) reveal the mixed blessings of different traits, and, ultimately, (d) provide a nuanced perspective on what it means for a trait to be "good" or "bad." (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Extraversion, Psychological , Friends , Emotions , Friends/psychology , Humans , Personality , Personality Disorders
9.
Behav Brain Sci ; 45: e30, 2022 02 10.
Article in English | MEDLINE | ID: mdl-35139952

ABSTRACT

Improvements to the validity of psychological science depend upon more than the actions of individual researchers. Editors, journals, and publishers wield considerable power in shaping the incentives that have ushered in the generalizability crisis. These gatekeepers must raise their standards to ensure authors' claims are supported by evidence. Unless gatekeepers change, changes made by individual scientists will not be sustainable.


Subject(s)
Research Personnel , Humans
10.
Annu Rev Psychol ; 73: 719-748, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34665669

ABSTRACT

Replication-an important, uncommon, and misunderstood practice-is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.


Subject(s)
Research Design , Humans , Reproducibility of Results
11.
Proc Natl Acad Sci U S A ; 118(52)2021 12 28.
Article in English | MEDLINE | ID: mdl-34933997

ABSTRACT

While the social sciences have made impressive progress in adopting transparent research practices that facilitate verification, replication, and reuse of materials, the problem of publication bias persists. Bias on the part of peer reviewers and journal editors, as well as the use of outdated research practices by authors, continues to skew the literature toward statistically significant effects, many of which may be false positives. To mitigate this bias, we propose a framework to enable authors to report all results efficiently (RARE), with an initial focus on experimental and other prospective empirical social science research that utilizes public study registries. This framework depicts an integrated system that leverages the capacities of existing infrastructure in the form of public registries, institutional review boards, journals, and granting agencies, as well as investigators themselves, to efficiently incentivize full reporting and thereby improve confidence in social science findings. In addition to increasing access to the results of scientific endeavors, a well-coordinated research ecosystem can prevent scholars from wasting time investigating the same questions in ways that have not worked in the past and reduce wasted funds on the part of granting agencies.

12.
Nat Hum Behav ; 5(12): 1663-1673, 2021 12.
Article in English | MEDLINE | ID: mdl-34811490

ABSTRACT

Self-correction-a key feature distinguishing science from pseudoscience-requires that scientists update their beliefs in light of new evidence. However, people are often reluctant to change their beliefs. We examined belief updating in action by tracking research psychologists' beliefs in psychological effects before and after the completion of four large-scale replication projects. We found that psychologists did update their beliefs; they updated as much as they predicted they would, but not as much as our Bayesian model suggests they should if they trust the results. We found no evidence that psychologists became more critical of replications when it would have preserved their pre-existing beliefs. We also found no evidence that personal investment or lack of expertise discouraged belief updating, but people higher on intellectual humility updated their beliefs slightly more. Overall, our results suggest that replication studies can contribute to self-correction within psychology, but psychologists may underweight their evidentiary value.


Subject(s)
Psychology , Research , Statistics as Topic , Humans
13.
Nat Hum Behav ; 5(12): 1602-1607, 2021 12.
Article in English | MEDLINE | ID: mdl-34711978

ABSTRACT

The replication crisis in the social, behavioural and life sciences has spurred a reform movement aimed at increasing the credibility of scientific studies. Many of these credibility-enhancing reforms focus, appropriately, on specific research and publication practices. A less often mentioned aspect of credibility is the need for intellectual humility or being transparent about and owning the limitations of our work. Although intellectual humility is presented as a widely accepted scientific norm, we argue that current research practice does not incentivize intellectual humility. We provide a set of recommendations on how to increase intellectual humility in research articles and highlight the central role peer reviewers can play in incentivizing authors to foreground the flaws and uncertainty in their work, thus enabling full and transparent evaluation of the validity of research.


Subject(s)
Research , Science , Humans
14.
Nature ; 595(7866): 181-188, 2021 07.
Article in English | MEDLINE | ID: mdl-34194044

ABSTRACT

Computational social science is more than just large repositories of digital data and the computational methods needed to construct and analyse them. It also represents a convergence of different fields with different ways of thinking about and doing science. The goal of this Perspective is to provide some clarity around how these approaches differ from one another and to propose how they might be productively integrated. Towards this end, we make two contributions. The first is a schema for thinking about research activities along two dimensions-the extent to which work is explanatory, focusing on identifying and estimating causal effects, and the degree of consideration given to testing predictions of outcomes-and how these two priorities can complement, rather than compete with, one another. Our second contribution is to advocate that computational social scientists devote more attention to combining prediction and explanation, which we call integrative modelling, and to outline some practical suggestions for realizing this goal.


Subject(s)
Computer Simulation , Data Science/methods , Forecasting/methods , Models, Theoretical , Social Sciences/methods , Goals , Humans
15.
Nat Hum Behav ; 5(8): 990-997, 2021 08.
Article in English | MEDLINE | ID: mdl-34168323

ABSTRACT

In registered reports (RRs), initial peer review and in-principle acceptance occur before knowing the research outcomes. This combats publication bias and distinguishes planned from unplanned research. How RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence. Also, there could be unintended costs such as reducing novelty. Here, 353 researchers peer reviewed a pair of papers from 29 published RRs from psychology and neuroscience and 57 non-RR comparison papers. RRs numerically outperformed comparison papers on all 19 criteria (mean difference 0.46, scale range -4 to +4) with effects ranging from RRs being statistically indistinguishable from comparison papers in novelty (0.13, 95% credible interval [-0.24, 0.49]) and creativity (0.22, [-0.14, 0.58]) to sizeable improvements in rigour of methodology (0.99, [0.62, 1.35]) and analysis (0.97, [0.60, 1.34]) and overall paper quality (0.66, [0.30, 1.02]). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.


Subject(s)
Peer Review, Research , Registries , Research/standards , Data Analysis , Humans , Neurosciences , Psychology , Research Design/standards , Research Report/standards
16.
Perspect Psychol Sci ; 16(6): 1255-1269, 2021 11.
Article in English | MEDLINE | ID: mdl-33645334

ABSTRACT

Science is often perceived to be a self-correcting enterprise. In principle, the assessment of scientific claims is supposed to proceed in a cumulative fashion, with the reigning theories of the day progressively approximating truth more accurately over time. In practice, however, cumulative self-correction tends to proceed less efficiently than one might naively suppose. Far from evaluating new evidence dispassionately and infallibly, individual scientists often cling stubbornly to prior findings. Here we explore the dynamics of scientific self-correction at an individual rather than collective level. In 13 written statements, researchers from diverse branches of psychology share why and how they have lost confidence in one of their own published findings. We qualitatively characterize these disclosures and explore their implications. A cross-disciplinary survey suggests that such loss-of-confidence sentiments are surprisingly common among members of the broader scientific population yet rarely become part of the public record. We argue that removing barriers to self-correction at the individual level is imperative if the scientific community as a whole is to achieve the ideal of efficient self-correction.


Subject(s)
Publications , Research Personnel , Attitude , Humans , Mental Processes , Writing
17.
PLoS One ; 16(2): e0246675, 2021.
Article in English | MEDLINE | ID: mdl-33621261

ABSTRACT

Academic journals provide a key quality-control mechanism in science. Yet, information asymmetries and conflicts of interests incentivize scientists to deceive journals about the quality of their research. How can honesty be ensured, despite incentives for deception? Here, we address this question by applying the theory of honest signaling to the publication process. Our models demonstrate that several mechanisms can ensure honest journal submission, including differential benefits, differential costs, and costs to resubmitting rejected papers. Without submission costs, scientists benefit from submitting all papers to high-ranking journals, unless papers can only be submitted a limited number of times. Counterintuitively, our analysis implies that inefficiencies in academic publishing (e.g., arbitrary formatting requirements, long review times) can serve a function by disincentivizing scientists from submitting low-quality work to high-ranking journals. Our models provide simple, powerful tools for understanding how to promote honest paper submission in academic publishing.


Subject(s)
Ethics, Research , Peer Review, Research/ethics , Humans , Models, Theoretical , Motivation/ethics , Organizations , Publishing/ethics , Quality Control , Research
18.
Article in English | MEDLINE | ID: mdl-35434719

ABSTRACT

Personality is not the most popular subfield of psychology. But, in one way or another, personality psychologists have played an outsized role in the ongoing "credibility revolution" in psychology. Not only have individual personality psychologists taken on visible roles in the movement, but our field's practices and norms have now become models for other fields to emulate (or, for those who share Baumeister's (2016, https://doi.org/10.1016/j.jesp.2016.02.003) skeptical view of the consequences of increasing rigor, a model for what to avoid). In this article we discuss some unique features of our field that may have placed us in an ideal position to be leaders in this movement. We do so from a subjective perspective, describing our impressions and opinions about possible explanations for personality psychology's disproportionate role in the credibility revolution. We also discuss some ways in which personality psychology remains less-than-optimal, and how we can address these flaws.

19.
Pers Soc Psychol Bull ; 47(11): 1535-1549, 2021 11.
Article in English | MEDLINE | ID: mdl-33342369

ABSTRACT

Participants in experience sampling method (ESM) studies are "beeped" several times per day to report on their momentary experiences-but participants do not always answer the beep. Knowing whether there are systematic predictors of missing a report is critical for understanding the extent to which missing data threatens the validity of inferences from ESM studies. Here, 228 university students completed up to four ESM reports per day while wearing the Electronically Activated Recorder (EAR)-an unobtrusive audio recording device-for a week. These audio recordings provided an alternative source of information about what participants were doing when they missed or completed reports (3,678 observations). We predicted missing ESM reports from 46 variables coded from the EAR recordings, and found very little evidence that missing an ESM report was correlated with constructs typically of interest to ESM researchers. These findings provide reassuring evidence for the validity of ESM research among relatively healthy university student samples.


Subject(s)
Ecological Momentary Assessment , Universities , Humans , Students